Last week we mainly talked about rational agents as the framework in which I would like to place all the algorithms we are going to look at from now on. I will try to tie this back to the agents as we go; what you should keep in mind is that, for instance, the search algorithms are the deliberative component in such an agent. So if we find a solution, then we can actually carry out the actions that this plan suggests. We have looked at a variety of agents and a variety of environments they can act in. And the last thing we looked at, which is where we started on Thursday, was three fundamentally different ways of representing the environment.
Okay, so we looked at three of them. The first is the atomic representation, which is basically just giving an environment a name. It is a black-box representation: we cannot look into it, and we cannot see or reason with the state of the environment. We only know this environment, and that this environment comes after that one, essentially.
The second is something we call factored, where you have a semi-transparent representation: the description has a number of slots, and these slots can have values. So you might know that you have an environment in which the weather is foggy, and you might have an environment in which there are, say, 53 students, and so on. You know certain things about the environment, but only through a finite number of attributes in the environment description, each of which can take values, and those you can reason about. We will see that this allows a completely different set of algorithms, which perform better than the ones for atomic representations. If you think about it, every combination of values in such a factored representation corresponds to one atomic environment, so we have far fewer things to look at; that is something we will probably see this week. And because you can look into these environments, you get more guidance about what to do next. So we are trading complexity: the atomic representation is wonderfully simple, but we trade that simplicity for a smaller number of environment descriptions and more guidance. That is essentially where the algorithms will differ.
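To make the contrast concrete, here is a minimal sketch in Python; it is not from the lecture, and the state names, attributes, and actions are invented for illustration. An atomic representation only lets us tell states apart by name, while a factored one lets us inspect individual attributes and count how many atomic states it covers:

```python
from itertools import product

# Atomic: each environment state is just an opaque name (a black box).
# We can tell states apart, but we cannot look inside them.
atomic_states = {"s0", "s1", "s2", "s3"}

# Factored: a state is a fixed, finite set of attribute slots with values.
state = {"weather": "foggy", "students": 53}

# Because the slots are visible, an agent can reason about them, e.g.
# with a simple condition-action rule that inspects one attribute:
def choose_action(state: dict) -> str:
    if state["weather"] == "foggy":
        return "turn_on_lights"
    return "proceed"

print(choose_action(state))  # -> turn_on_lights

# Every combination of slot values corresponds to one atomic state, so a
# short factored description covers a whole family of named states:
weathers = ["sunny", "foggy"]
student_counts = [0, 53]
print(len(list(product(weathers, student_counts))))  # -> 4 atomic states
```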
And the third kind of environment representation we call structured, where you basically have full flexibility in describing the environment. Human agents use all kinds of representations for communication. You might fill out summary forms about certain things: I might describe my teaching environment by what the course is, or if you just look at Univis, you have a couple of slots there: when, where, who, for whom, prerequisites, course description, literature, and so on. That is a factored representation. A fully structured representation would be something where I tell somebody a story: I have this AI course, and, can you imagine, last week I only had students who sat in the first three rows. There is never going to be a slot in Univis for the number of rows that contain students today, so I would need a fairly general mechanism to actually write that down. But it might be important for certain things; I think I gave the example of the truck blocking the road because there is a loose cow in a driveway, or something like this. Those are not things you put into forms where you fill out values.
Recap: Representing the Environment in Agents
The main video on this topic is chapter 6, clip 6.